Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations

Authors

Lukas Gonon, Christoph Schwab

Abstract

Deep neural networks (DNNs) with ReLU activation function are proved to be able to express viscosity solutions of linear partial integrodifferential equations (PIDEs) on state spaces of possibly high dimension $d$. Admissible PIDEs comprise Kolmogorov equations for high-dimensional diffusion, advection, and for pure jump Lévy processes. We prove for such PIDEs, arising from a class of jump-diffusions on $\mathbb{R}^d$, that for any suitable measure $\mu^d$ on $\mathbb{R}^d$ there exist constants $C, p, q > 0$ such that for every $\varepsilon \in (0,1]$ the DNN $L^2(\mathbb{R}^d, \mu^d)$-expression error of viscosity solutions of the PIDE is of size $\varepsilon$, with DNN size bounded by $C d^p \varepsilon^{-q}$. In particular, the constant $C > 0$ is independent of $d \in \mathbb{N}$ and depends only on the coefficients in the PIDE and on the measure used to quantify the error. This establishes that ReLU DNNs can break the curse of dimensionality (CoD for short) for viscosity solutions of linear, possibly degenerate PIDEs corresponding to Markovian jump-diffusion processes. As a consequence of the employed techniques, we also obtain that expectations of a large class of path-dependent functionals of the underlying jump-diffusion processes can be expressed without the CoD.
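
To make the scaling concrete: the theorem only asserts that constants $C, p, q > 0$ exist, so the values below are hypothetical placeholders chosen for illustration. The sketch contrasts the polynomial DNN size bound $C d^p \varepsilon^{-q}$ with the $\varepsilon^{-d}$ point count of a tensor-product grid at accuracy $\varepsilon$:

```python
import math

def dnn_size_bound(d: int, eps: float, C: float = 1.0, p: float = 2.0, q: float = 2.0) -> int:
    """Polynomial bound C * d**p * eps**(-q) on the DNN size (number of
    nonzero weights). C, p, q are illustrative placeholders: the theorem
    only guarantees their existence, independently of the dimension d."""
    return math.ceil(C * d ** p * eps ** (-q))

def grid_points(d: int, eps: float) -> float:
    """A tensor-product grid of mesh width eps needs eps**(-d) points:
    exponential in d -- the curse of dimensionality."""
    return eps ** (-d)

for d in (2, 10, 100):
    print(f"d={d:4d}  DNN bound: {dnn_size_bound(d, 0.1):>10,}  grid: {grid_points(d, 0.1):.1e}")
```

At $d = 100$ the hypothetical DNN bound is $10^6$ weights while the grid needs $10^{100}$ points, which is the gap the paper's expression-rate result closes.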


Related articles

Overcoming the curse of dimensionality: Solving high-dimensional partial differential equations using deep learning

Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as “the curse of dimensionality”. This paper presents a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated as a control theory prob...
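
The snippet above is cut off, but the key move it describes, recasting the PDE probabilistically and simulating paths instead of meshing the state space, can be illustrated without the deep-learning machinery. The following sketch is not that paper's deep BSDE method; it uses the simpler Feynman-Kac representation to solve a 100-dimensional heat equation by plain Monte Carlo, with cost linear in $d$:

```python
import numpy as np

def heat_mc(g, x0, T=1.0, n_paths=100_000, seed=0):
    """Feynman-Kac Monte Carlo: u(0, x) = E[g(x + W_T)] solves the heat
    equation u_t + 0.5 * Laplacian(u) = 0 with terminal condition u(T, .) = g.
    Cost is O(n_paths * d), with no eps**(-d) grid anywhere."""
    rng = np.random.default_rng(seed)
    W_T = rng.normal(scale=np.sqrt(T), size=(n_paths, x0.shape[0]))
    return g(x0 + W_T).mean()

# d = 100; for g(x) = ||x||^2 the exact value at x0 = 0 is ||x0||^2 + d * T.
d, T = 100, 1.0
estimate = heat_mc(lambda x: (x ** 2).sum(axis=1), np.zeros(d), T=T)
print(f"Monte Carlo: {estimate:.2f}  exact: {d * T:.2f}")
```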


Deep Relaxation: partial differential equations for optimizing deep neural networks

We establish connections between non-convex optimization methods for training deep neural networks (DNNs) and the theory of partial differential equations (PDEs). In particular, we focus on relaxation techniques initially developed in statistical physics, which we show to be solutions of a nonlinear Hamilton-Jacobi-Bellman equation. We employ the underlying stochastic control problem to analyze...


Breaking the Curse of Dimensionality with Convex Neural Networks

We consider neural networks with a single hidden layer and non-decreasing positively homogeneous activation functions like the rectified linear units. By letting the number of hidden units grow unbounded and using classical non-Euclidean regularization tools on the output weights, they lead to a convex optimization problem and we provide a detailed theoretical analysis of their generalization p...
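
The convexification sketched above leans on positive homogeneity of the activation: any positive rescaling of a hidden unit's input weights can be absorbed into its output weight, so a norm-based regularizer on the output weights alone controls the whole unit. A minimal numerical check of that property (illustrative only, not code from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# ReLU is positively homogeneous of degree 1: relu(a * x) = a * relu(x)
# for every a > 0, so scaling hidden weights by a and output weights by
# 1/a leaves the network function unchanged -- the scale can be normalized
# away and pushed entirely onto the (regularized) output layer.
x = np.linspace(-2.0, 2.0, 9)
for a in (0.5, 3.0, 10.0):
    assert np.allclose(relu(a * x), a * relu(x))
print("positive homogeneity verified on a sample grid")
```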


Nonparametric regression using deep neural networks with ReLU activation function

Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to log n-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constrain...


The curse of dimensionality

In this text, some questions related to higher-dimensional geometric spaces are discussed. The goal is to give the reader a feeling for the geometric distortions that arise when such spaces are used, e.g. as search spaces.
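
One such distortion is easy to reproduce numerically: in high dimension, distances from a query point to i.i.d. random points concentrate, so the nearest and the farthest neighbour become almost equidistant. A small illustration (not taken from the cited text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Distance concentration: for random points in the unit cube [0, 1]^d,
# the ratio of the nearest to the farthest distance from a query point
# tends to 1 as d grows, so "nearest neighbour" carries less and less
# information in a high-dimensional search space.
for d in (2, 10, 100, 1000):
    points = rng.random((1000, d))
    query = rng.random(d)
    dists = np.linalg.norm(points - query, axis=1)
    print(f"d={d:5d}  nearest/farthest distance ratio: {dists.min() / dists.max():.3f}")
```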



Journal

Journal title: Analysis and Applications

Year: 2022

ISSN: 1793-6861, 0219-5305

DOI: https://doi.org/10.1142/s0219530522500129